Covering convex bodies by cylinders and lattice points by flats
In connection with an unsolved problem of Bang (1951), we give a lower bound
for the sum of the base volumes of cylinders covering a d-dimensional convex
body in terms of the relevant basic measures of the given convex body. As an
application, we establish lower bounds on the number of k-dimensional flats
(i.e. translates of k-dimensional linear subspaces) needed to cover all the
integer points of a given convex body in d-dimensional Euclidean space for
0 < k < d.
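As a toy illustration of the flat-covering statement (a sketch for intuition, not the paper's argument): covering the integer points of the square [0, n]^2 by 1-dimensional flats (lines) takes exactly n + 1 lines, since a brute-force check confirms that no single line meets more than n + 1 of the (n+1)^2 grid points.

```python
from itertools import combinations

def grid(n):
    return [(x, y) for x in range(n + 1) for y in range(n + 1)]

def collinear(p, q, r):
    # Cross-product test: zero iff p, q, r lie on one line.
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def max_points_on_a_line(n):
    """Largest number of grid points of [0, n]^2 on any single line."""
    pts = grid(n)
    best = 0
    for p, q in combinations(pts, 2):
        best = max(best, sum(1 for r in pts if collinear(p, q, r)))
    return best

n = 3
# Any line meets the (n+1)^2 grid points in at most n+1 of them, so at
# least (n+1)^2 / (n+1) = n+1 lines are needed to cover them all ...
assert max_points_on_a_line(n) == n + 1
# ... and the n+1 horizontal lines y = 0, ..., n clearly suffice.
```

The paper's lower bounds generalize this counting idea to arbitrary convex bodies and k-dimensional flats.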
An Improved BKW Algorithm for LWE with Applications to Cryptography and Lattices
In this paper, we study the Learning With Errors problem and its binary
variant, where secrets and errors are binary or taken in a small interval. We
introduce a new variant of the Blum, Kalai and Wasserman algorithm, relying on
a quantization step that generalizes and fine-tunes modulus switching. In
general this new technique yields a significant gain in the constant in front
of the exponent in the overall complexity. We illustrate this by solving
within half a day an LWE instance with dimension n = 128, Gaussian noise
and a binary secret, while the previous best result based on BKW claims a
substantially higher time and sample complexity for the same parameters. We then
introduce variants of BDD, GapSVP and UniqueSVP, where the target point is
required to lie in the fundamental parallelepiped, and show how the previous
algorithm is able to solve these variants in subexponential time. Moreover, we
show how the previous algorithm can be used to solve the BinaryLWE problem
with n samples in subexponential time. This
analysis does not require any heuristic assumption, contrary to other algebraic
approaches; instead, it uses a variant of an idea by Lyubashevsky to generate
many samples from a small number of samples. This makes it possible to
asymptotically and heuristically break the NTRU cryptosystem in subexponential
time (without contradicting its security assumption). We are also able to solve
subset sum problems in subexponential time in a density regime where the
previous best algorithm requires exponential time, which is of independent
interest. As a direct application, we can solve in subexponential time
the parameters of a cryptosystem based on this problem proposed at TCC 2010. Comment: CRYPTO 201
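The core BKW step that the paper's quantization technique refines can be sketched on toy LPN, the binary special case mentioned above. All names and parameters below are illustrative, not the paper's: samples agreeing on a block of coordinates are XORed together, zeroing that block at the cost of doubling the noise rate.

```python
import random

def lpn_samples(secret, m, noise_rate=0.05):
    """Toy LPN oracle: pairs (a, <a, s> + e mod 2) with Bernoulli noise."""
    n = len(secret)
    out = []
    for _ in range(m):
        a = [random.randrange(2) for _ in range(n)]
        b = sum(x * y for x, y in zip(a, secret)) % 2
        if random.random() < noise_rate:
            b ^= 1
        out.append((a, b))
    return out

def bkw_reduce_block(samples, lo, hi):
    """One BKW reduction step: XOR pairs of samples that collide on
    coordinates [lo, hi), producing samples that are zero on that block
    (each XOR doubles the effective noise)."""
    buckets = {}
    reduced = []
    for a, b in samples:
        key = tuple(a[lo:hi])
        if key in buckets:
            a2, b2 = buckets.pop(key)
            reduced.append(([x ^ y for x, y in zip(a, a2)], b ^ b2))
        else:
            buckets[key] = (a, b)
    return reduced

random.seed(0)
s = [1, 0, 1, 1, 0, 1]
samples = lpn_samples(s, 200)
reduced = bkw_reduce_block(samples, 3, 6)  # zero out the last 3 coordinates
assert all(a[3:] == [0, 0, 0] for a, _ in reduced)
```

For LWE proper the XOR becomes addition/subtraction modulo q, and the paper's quantization step replaces the exact collisions used here.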
On Basing Search SIVP on NP-Hardness
The possibility of basing cryptography on the minimal assumption NP ⊄ BPP is at the very heart of complexity-theoretic cryptography. The closest we have gotten so far is lattice-based cryptography, whose average-case security is based on the worst-case hardness of approximate shortest vector problems on integer lattices. The state of the art is the construction of a one-way function (and collision-resistant hash function) based on the hardness of the approximate shortest independent vector problem (SIVP).
Although SIVP is NP-hard in its exact version, Guruswami et al. (CCC 2004) showed that its approximation version is in NP ∩ coAM and thus unlikely to be NP-hard. Indeed, any language that can be reduced to it (under general probabilistic polynomial-time adaptive reductions) is in AM ∩ coAM by the results of Peikert and Vaikuntanathan (CRYPTO 2008) and Mahmoody and Xiao (CCC 2010). However, none of these results apply to reductions to search problems, still leaving open a ray of hope: can NP be reduced to solving search SIVP with some approximation factor?
We eliminate this possibility by showing that any language that can be reduced to solving search SIVP with any approximation factor lies in AM ∩ coAM. As a side product, we show that any language that can be reduced to discrete Gaussian sampling with a certain parameter lies in AM ∩ coAM.
Better Algorithms for LWE and LWR
The Learning With Errors problem (LWE) is increasingly used in cryptography, for instance, in the design of some fully homomorphic encryption schemes. It is thus of primary importance to find the best algorithms that might solve this problem so that concrete parameters can be proposed. The BKW algorithm was proposed by Blum et al. as an algorithm to solve the Learning Parity with Noise problem (LPN), a subproblem of LWE. This algorithm was then adapted to LWE by Albrecht et al. In this paper, we improve the algorithm proposed by Albrecht et al. by using multidimensional Fourier transforms. Our algorithm is, to the best of our knowledge, the fastest LWE-solving algorithm. Compared to the work of Albrecht et al., we greatly simplify the analysis, getting rid of integrals which were hard to evaluate in the final complexity. We also remove some heuristics on rounded Gaussians. Some of our results on rounded Gaussians might be of independent interest. Moreover, we also analyze algorithms solving LWE with discrete Gaussian noise. Finally, we apply the same algorithm to the Learning With Rounding problem (LWR) for prime q, a deterministic counterpart to LWE. This problem is receiving increasing attention and is used, for instance, to design pseudorandom functions. To the best of our knowledge, our algorithm is the first algorithm applied directly to LWR. Furthermore, the analysis of LWR contains some technical results of independent interest.
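For reference, the LWR samples targeted above replace LWE's added Gaussian noise by deterministic rounding from Z_q down to Z_p. A minimal sketch with illustrative toy parameters (not the paper's, and far below cryptographic size):

```python
import random

q, p, n = 257, 16, 8   # illustrative moduli and dimension, not the paper's

def lwr_sample(secret):
    """One LWR sample: (a, round(p/q * <a, s>) mod p).
    Deterministic -- the rounding error plays the role of LWE's noise."""
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(x * y for x, y in zip(a, secret)) % q
    b = round(p * inner / q) % p
    return a, b

random.seed(1)
s = [random.randrange(q) for _ in range(n)]
a, b = lwr_sample(s)

# The implicit "noise" is the rounding error, at most q/(2p) modulo q:
inner = sum(x * y for x, y in zip(a, s)) % q
err = abs(inner - b * q / p)
err = min(err, q - err)
assert err <= q / (2 * p) + 1e-9
```

Because the rounding error is a deterministic function of a and s rather than fresh randomness, analyses designed for Gaussian-noise LWE do not transfer directly, which is why a dedicated treatment of LWR is needed.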
Explicit upper bounds for geometric and arithmetic Hilbert-Samuel functions
By using the ℝ-filtration approach of Arakelov geometry, we establish explicit upper bounds for the geometric and arithmetic Hilbert-Samuel functions of line bundles on projective varieties and hermitian line bundles on arithmetic projective varieties.
Quantum FHE (Almost) As Secure As Classical
Fully homomorphic encryption schemes (FHE) allow applying arbitrary efficient computations to encrypted data without decrypting it first. In quantum FHE (QFHE) we may want to apply an arbitrary efficient quantum computation to (classical or quantum) encrypted data.
We present a QFHE scheme with classical key generation (and classical encryption and decryption if the encrypted message is itself classical) with comparable properties to classical FHE. Security relies on the hardness of the learning with errors (LWE) problem with polynomial modulus, which translates to the worst-case hardness of approximating short vector problems in lattices to within a polynomial factor. Up to polynomial factors, this matches the best known assumption for classical FHE. Similarly to the classical setting, relying on LWE alone only implies leveled QFHE (where the public key length depends linearly on the maximal allowed evaluation depth). An additional circular security assumption is required to support completely unbounded depth. Interestingly, our circular security assumption is the same assumption that is made to achieve unbounded-depth multi-key classical FHE.
Technically, we rely on the outline of Mahadev (arXiv 2017), which achieves this functionality by relying on a super-polynomial LWE modulus and on a new circular security assumption. We observe a connection between the functionality of evaluating quantum gates and the circuit-privacy property of classical homomorphic encryption. While this connection is not sufficient to imply QFHE by itself, it leads us to a path that ultimately allows using classical FHE schemes with polynomial modulus towards constructing QFHE with the same modulus.
Algebraic Techniques for Short(er) Exact Lattice-Based Zero-Knowledge Proofs
A key component of many lattice-based protocols is a zero-knowledge proof of knowledge of a vector s with small coefficients satisfying As = u. While there exist fairly efficient proofs for a relaxed version of this equation, which prove knowledge of s' and c satisfying As' = uc, where s' has somewhat larger coefficients than s and c is some small element in the ring over which the proof is performed, the proofs for the exact version of the equation are considerably less practical. The best such proof technique is an adaptation of Stern's protocol (Crypto '93), for proving knowledge of nearby codewords, to larger moduli. The scheme is a Σ-protocol, each of whose iterations has soundness error 2/3, and thus requires over 200 repetitions to obtain a soundness error of 2^-128, which is the main culprit behind the large size of the proofs produced.
In this paper, we propose the first lattice-based proof system that significantly outperforms Stern-type proofs for proving knowledge of a short s satisfying As = u. Unlike Stern's proof, which is combinatorial in nature, our proof is more algebraic and uses various relaxed zero-knowledge proofs as sub-routines. The main savings in our proof system come from the fact that each round has soundness error 1/n, where n is the number of columns of A. For typical applications, n is a few thousand, and therefore our proof needs to be repeated only around 10 times to achieve a soundness error of 2^-128. For concrete parameters, it produces proofs that are around an order of magnitude smaller than those produced using Stern's approach.
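The contrast in repetition counts between Stern-type proofs and per-round soundness error 1/n follows from standard soundness amplification: k independent rounds with per-round error ε give overall error ε^k, so reaching 2^-128 needs k ≥ 128 / log2(1/ε) rounds. A quick check, with n = 4096 as an illustrative column count:

```python
import math

def repetitions(eps, target_bits=128):
    """Rounds k needed so that eps**k <= 2**(-target_bits)."""
    return math.ceil(target_bits / math.log2(1 / eps))

stern = repetitions(2 / 3)         # Stern-type: soundness error 2/3 per round
algebraic = repetitions(1 / 4096)  # per-round error 1/n, n = 4096 (illustrative)
print(stern, algebraic)  # 219 11
```

This is why shrinking the per-round soundness error from a constant to 1/n translates directly into proofs that are an order of magnitude shorter.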
Zero-Knowledge Arguments for Matrix-Vector Relations and Lattice-Based Group Encryption
Group encryption (GE) is the natural encryption analogue of group signatures in that it allows verifiably encrypting messages for some anonymous member of a group while providing evidence that the receiver is a properly certified group member. Should the need arise, an opening authority is capable of identifying the receiver of any ciphertext. As introduced by Kiayias, Tsiounis and Yung (Asiacrypt'07), GE is motivated by applications in the context of oblivious retriever storage systems, anonymous third parties and hierarchical group signatures. This paper provides the first realization of group encryption under lattice assumptions. Our construction is proved secure in the standard model (assuming interaction in the proving phase) under the Learning-With-Errors (LWE) and Short-Integer-Solution (SIS) assumptions. As a crucial component of our system, we describe a new zero-knowledge argument system for demonstrating that a given ciphertext is a valid encryption under some hidden but certified public key, which requires proving quadratic statements about LWE relations. Specifically, our protocol allows arguing knowledge of witnesses consisting of X ∈ Z_q^{m×n}, s ∈ Z_q^n and a small-norm e ∈ Z^m which underlie a public vector b = X·s + e ∈ Z_q^m, while simultaneously proving that the matrix X ∈ Z_q^{m×n} has been correctly certified. We believe our proof system to be useful in other applications involving zero-knowledge proofs in the lattice setting.
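The witness relation argued about above is easy to state concretely. A toy instantiation with tiny illustrative parameters (nothing here is the scheme itself, and the sizes are far below cryptographic):

```python
import random

q, m, n = 97, 6, 3   # toy parameters, chosen only for readability

random.seed(2)
X = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # X in Z_q^{m x n}
s = [random.randrange(q) for _ in range(n)]                      # s in Z_q^n
e = [random.randrange(-2, 3) for _ in range(m)]                  # small-norm e in Z^m

# Public vector b = X*s + e mod q.  The argument system proves knowledge of
# (X, s, e) underlying b; note the product X*s is quadratic in the witnesses,
# since both the matrix and the secret vector are hidden.
b = [(sum(X[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
assert all(0 <= bi < q for bi in b)
```

The quadratic nature of X·s, with both factors secret, is exactly what makes this relation harder to argue than a standard LWE statement where X is public.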
Lattice-based Zero-knowledge SNARGs for Arithmetic Circuits
Succinct non-interactive arguments (SNARGs) enable verifying NP computations
with substantially lower complexity than that required
for classical NP verification. In this work, we construct a zero-knowledge
SNARG candidate that relies only on lattice-based assumptions
which are claimed to hold even in the presence of quantum computers.
Central to this new construction is the notion of linear-targeted malleability introduced
by Bitansky et al. (TCC 2013) and the conjecture that
variants of Regev encryption satisfy this property. Then, using
the efficient characterization of NP languages as
Square Arithmetic Programs we build the first quantum-resilient zk-SNARG for
arithmetic circuits with a constant-size proof consisting
of only 2 lattice-based ciphertexts.
Our protocol is designated-verifier, achieves zero-knowledge, and has shorter
proofs and a shorter CRS than previous such schemes, e.g. Boneh et al.
(Eurocrypt 2017).